10 research outputs found

    Activity Recognition and Prediction in Real Homes

    In this paper, we present work in progress on activity recognition and prediction in real homes using either binary sensor data or depth video data. We present our field trial and set-up for collecting and storing the data, our methods, and our current results. We compare the accuracy of predicting the next binary sensor event using probabilistic methods and Long Short-Term Memory (LSTM) networks, incorporate time information to improve prediction accuracy, and predict both the next sensor event and its mean time of occurrence with a single LSTM model. We investigate transfer learning between apartments and show that it is possible to pre-train the model with data from other apartments and achieve good accuracy in a new apartment straight away. In addition, we present preliminary results from activity recognition using low-resolution depth video data from seven apartments, and classify four activities (no movement, standing up, sitting down, and TV interaction) using a relatively simple processing method in which we apply an Infinite Impulse Response (IIR) filter to extract movements from the frames before feeding them to a convolutional LSTM network for classification.
    Comment: 12 pages, Symposium of the Norwegian AI Society NAIS 201
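    As a rough illustration of the joint prediction described above (next sensor event plus its time of occurrence from one LSTM), the following Python/PyTorch sketch shows one plausible model shape. It is not the authors' implementation: the layer sizes, the embedding of sensor IDs, and the time-gap input are assumptions made for illustration.

        # Illustrative sketch (not the paper's code): one LSTM that jointly predicts
        # the next binary sensor event and its time of occurrence.
        import torch
        import torch.nn as nn

        class NextEventLSTM(nn.Module):
            def __init__(self, num_sensors, embed_dim=32, hidden_dim=64):
                super().__init__()
                self.embed = nn.Embedding(num_sensors, embed_dim)
                # +1 input feature for the elapsed time since the previous event (assumed encoding)
                self.lstm = nn.LSTM(embed_dim + 1, hidden_dim, batch_first=True)
                self.event_head = nn.Linear(hidden_dim, num_sensors)  # which sensor fires next
                self.time_head = nn.Linear(hidden_dim, 1)             # when it fires (regression)

            def forward(self, event_ids, delta_t):
                # event_ids: (batch, seq_len) integer sensor IDs
                # delta_t:   (batch, seq_len) time gaps between consecutive events
                x = torch.cat([self.embed(event_ids), delta_t.unsqueeze(-1)], dim=-1)
                out, _ = self.lstm(x)
                last = out[:, -1]  # hidden state after the last observed event
                return self.event_head(last), self.time_head(last)

        # Toy usage: 8 sequences of 50 events drawn from 20 hypothetical sensors.
        model = NextEventLSTM(num_sensors=20)
        events = torch.randint(0, 20, (8, 50))
        gaps = torch.rand(8, 50)
        logits, t_pred = model(events, gaps)

    Training such a model would plausibly combine a cross-entropy loss on the event logits with a regression loss on the predicted time, so both heads are learned jointly.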

    CPT+: Decreasing the time/space complexity of the Compact Prediction Tree

    Predicting the next items of sequences of symbols has many applications in a wide range of domains. Several sequence prediction models have been proposed, such as DG, All-k-order Markov and PPM. Recently, a model named Compact Prediction Tree (CPT) has been proposed. It relies on a tree structure and a more complex prediction algorithm to offer considerably more accurate predictions than many state-of-the-art prediction models. However, an important limitation of CPT is its high time and space complexity. In this article, we address this issue by proposing three novel strategies to reduce CPT's size and prediction time, and to increase its accuracy. Experimental results on seven real-life datasets show that the resulting model (CPT+) is up to 98 times more compact and 4.5 times faster than CPT, and has the best overall accuracy when compared to six state-of-the-art models from the literature: All-k-order Markov, CPT, DG, LZ78, PPM and TDAG.
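    For context on the next-symbol prediction task, here is a minimal sketch of one of the baselines named above, an all-k-order Markov predictor. It is not CPT+ itself; the back-off strategy and data layout are assumptions made for illustration.

        # Minimal sketch of next-symbol prediction with an all-k-order Markov model
        # (a baseline the abstract compares against), not the CPT+ algorithm itself.
        from collections import Counter, defaultdict

        class AllKOrderMarkov:
            def __init__(self, max_order=3):
                self.max_order = max_order
                # counts[context_tuple][next_symbol] = observed frequency
                self.counts = defaultdict(Counter)

            def train(self, sequences):
                for seq in sequences:
                    for i in range(1, len(seq)):
                        for k in range(1, self.max_order + 1):
                            if i - k < 0:
                                break
                            self.counts[tuple(seq[i - k:i])][seq[i]] += 1

            def predict(self, prefix):
                # Back off from the longest matching context to shorter ones.
                for k in range(min(self.max_order, len(prefix)), 0, -1):
                    ctx = tuple(prefix[-k:])
                    if ctx in self.counts:
                        return self.counts[ctx].most_common(1)[0][0]
                return None

        model = AllKOrderMarkov(max_order=3)
        model.train([["a", "b", "c", "a", "b", "c"], ["a", "b", "d"]])
        print(model.predict(["a", "b"]))  # likely "c"

    CPT+ differs from this kind of model by storing training sequences losslessly in a compressed tree and matching subsequences at prediction time, which is what the strategies above target for size and speed.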

    PutMode: Prediction of uncertain trajectories in moving objects databases

    Objective: Prediction of moving objects with uncertain motion patterns is rapidly emerging as an exciting new paradigm and is important for law-enforcement applications such as criminal tracking analysis. However, existing algorithms for prediction in spatio-temporal databases focus on discovering frequent trajectory patterns from historical data and overlook the effect of important factors such as speed and moving direction. Such approaches lack generality, as moving objects may follow dynamic motion patterns in real life. Methods: We propose a framework for predicting uncertain trajectories in moving objects databases. Based on Continuous Time Bayesian Networks (CTBNs), we develop a trajectory prediction algorithm called PutMode (Prediction of uncertain trajectories in Moving objects databases). It comprises three phases: (i) construction of TCTBNs (Trajectory CTBNs), which obey the Markov property and consist of states combining three important variables, namely street identifier, speed, and direction; (ii) trajectory clustering to remove outlying trajectories; and (iii) prediction of the motion behaviors of moving objects based on the TCTBNs to obtain the possible trajectories. Results: Experimental results show that PutMode can predict the possible motion curves of objects accurately and efficiently on distinct trajectory data sets, with an average accuracy higher than 80%. Furthermore, we illustrate the crucial role of trajectory clustering, which benefits both prediction time and prediction accuracy. © 2009 Springer Science+Business Media, LLC.
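    The state-transition idea behind PutMode can be illustrated with a much simpler discrete-time Markov approximation over (street identifier, speed, direction) states. This sketch stands in for the CTBN-based model only to show the kind of prediction being performed; the greedy path-following and the state encoding are assumptions, not the published algorithm.

        # Hedged sketch: a discrete-time Markov-chain approximation over
        # (street_id, speed_bucket, direction) states, illustrating the flavour of
        # state-transition model PutMode builds with CTBNs.
        from collections import Counter, defaultdict

        class TrajectoryMarkovModel:
            def __init__(self):
                # transitions[state] = Counter of observed next states
                self.transitions = defaultdict(Counter)

            def train(self, trajectories):
                for traj in trajectories:
                    for cur, nxt in zip(traj, traj[1:]):
                        self.transitions[cur][nxt] += 1

            def predict_path(self, start, steps=3):
                # Greedily follow the most frequent transition from each state.
                path, cur = [start], start
                for _ in range(steps):
                    if cur not in self.transitions:
                        break
                    cur = self.transitions[cur].most_common(1)[0][0]
                    path.append(cur)
                return path

        # Toy usage with hypothetical street IDs, speed buckets, and headings.
        model = TrajectoryMarkovModel()
        model.train([[("s1", "slow", "N"), ("s2", "slow", "N"), ("s3", "fast", "E")]])
        print(model.predict_path(("s1", "slow", "N"), steps=2))

    Unlike this discrete-time toy, CTBNs model how long an object dwells in each state in continuous time, which is why PutMode can reason about when, not just whether, a transition occurs.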